Results 1 - 9 of 9
1.
Sensors (Basel) ; 21(20)2021 Oct 09.
Article in English | MEDLINE | ID: mdl-34695919

ABSTRACT

In agriculture, explainable deep neural networks (DNNs) can be used to pinpoint the discriminative parts of weeds in an image classification task, albeit at low resolution, to support weed population control. This paper proposes a multi-layer attention procedure based on a transformer, combined with a fusion rule, to interpret the DNN decision through a high-resolution attention map. The fusion rule is a weighted average that combines attention maps from different layers based on their saliency. Attention maps that explain why a weed is or is not assigned to a certain class help agronomists to shape the high-resolution weed identification keys (WIK) that the model perceives. The model is trained and evaluated on two agricultural datasets containing plants grown under different conditions: the Plant Seedlings Dataset (PSD) and the Open Plant Phenotyping Dataset (OPPD). The resulting attention maps highlight the regions relevant to the decision and provide information about misclassifications, enabling cross-dataset evaluations. Comparisons with the state of the art show the classification improvements obtained after applying the attention maps. Average accuracies of 95.42% and 96% are achieved for the negative and positive explanations of the PSD test sets, respectively. In the OPPD evaluations, accuracies of 97.78% and 97.83% are obtained for the negative and positive explanations, respectively. A visual comparison between attention maps also shows the gain in resolution.
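
The core of the fusion rule is a saliency-weighted average of per-layer attention maps, upsampled to the input resolution. The sketch below illustrates that idea in Python; the use of map variance as the saliency score and bilinear upsampling are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np
from skimage.transform import resize


def fuse_attention_maps(attention_maps, target_shape):
    """Fuse per-layer attention maps into one high-resolution map.

    attention_maps : list of 2-D arrays, one per transformer layer
                     (deeper layers typically have coarser resolution).
    target_shape   : (H, W) of the input image.
    """
    upsampled, weights = [], []
    for amap in attention_maps:
        # Bilinearly upsample each layer's map to the input resolution.
        up = resize(amap, target_shape, order=1, anti_aliasing=False)
        upsampled.append(up)
        # Use the map's variance as a simple saliency score (assumption).
        weights.append(up.var())

    weights = np.asarray(weights)
    weights = weights / weights.sum()

    # Saliency-weighted average across layers.
    fused = sum(w * m for w, m in zip(weights, upsampled))
    # Normalise to [0, 1] so it can be rendered as a heat map.
    return (fused - fused.min()) / (fused.max() - fused.min() + 1e-8)
```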


Subjects
Attention, Neural Networks (Computer), Agriculture, Weeds, Seedlings
2.
Sensors (Basel) ; 21(1)2020 Dec 29.
Article in English | MEDLINE | ID: mdl-33383904

ABSTRACT

Crop mixtures are often beneficial in crop rotations to enhance resource utilization and yield stability. While targeted management, dependent on the local species composition, has the potential to increase the crop value, it comes at a higher expense in terms of field surveys. As fine-grained species distribution mapping of within-field variation is typically unfeasible, the potential of targeted management remains an open research area. In this work, we propose a new method for determining the biomass species composition from high-resolution color images using a DeepLabv3+-based convolutional neural network. Data collection was performed at four separate experimental plot trial sites over three growing seasons. The method is thoroughly evaluated by predicting the biomass composition of different grass-clover mixtures using only an image of the canopy. With a relative biomass clover content prediction of R² = 0.91, we present new state-of-the-art results across the widely varying sites. Combining the algorithm with an all-terrain vehicle (ATV)-mounted image acquisition system, we demonstrate a feasible method for robust coverage and species distribution mapping of 225 ha of mixed crops at a median capacity of 17 ha per hour and 173 images per hectare.
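
As a rough illustration of the downstream step, the sketch below turns a pixel-wise segmentation of a canopy image into per-species cover fractions. The class indices and the simple pixel counting are illustrative assumptions; the paper itself maps canopy images to relative biomass with a trained DeepLabv3+ model rather than this toy computation.

```python
import numpy as np

CLASSES = {0: "soil", 1: "grass", 2: "clover", 3: "weed"}  # assumed label map


def cover_fractions(label_mask: np.ndarray) -> dict:
    """Fraction of vegetation pixels belonging to each species."""
    veg = label_mask > 0                      # ignore soil/background pixels
    total = veg.sum()
    return {
        name: float((label_mask == idx).sum()) / max(total, 1)
        for idx, name in CLASSES.items()
        if idx > 0
    }


# Example: a dummy 2x2 mask with two clover, one grass and one soil pixel.
mask = np.array([[0, 2], [2, 1]])
print(cover_fractions(mask))   # {'grass': 0.33.., 'clover': 0.66.., 'weed': 0.0}
```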

3.
Sensors (Basel) ; 18(5)2018 May 16.
Article in English | MEDLINE | ID: mdl-29772666

ABSTRACT

This study outlines a new method for automatically estimating the weed species and growth stage (from cotyledon until eight leaves are visible) in in situ images covering 18 weed species or families. Images of weeds growing within a variety of crops were gathered across variable environmental conditions with regard to soil type, resolution, and light settings. Then, 9649 of these images were used to train a convolutional neural network that automatically divided the weeds into nine growth classes. The performance of the proposed approach was evaluated on a further set of 2516 images, which also varied in terms of crop, soil type, image resolution, and light conditions. The approach achieved a maximum accuracy of 78% for identifying Polygonum spp. and a minimum accuracy of 46% for blackgrass. In addition, it achieved an average accuracy of 70% in estimating the number of leaves and 96% accuracy when accepting a deviation of up to two leaves. These results show that this new method based on deep convolutional neural networks has a relatively high ability to estimate early growth stages across a wide variety of weed species.
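
The two leaf-count figures (70% exact, 96% within a two-leaf deviation) correspond to a simple tolerance-based accuracy measure. The snippet below shows one way to compute it; the example predictions are made up, not data from the study.

```python
import numpy as np


def leaf_count_accuracy(pred, true, tolerance=0):
    """Fraction of samples whose predicted leaf count is within `tolerance` of the truth."""
    pred, true = np.asarray(pred), np.asarray(true)
    return float(np.mean(np.abs(pred - true) <= tolerance))


true_leaves = [2, 4, 6, 8, 2, 4]
pred_leaves = [2, 5, 6, 6, 2, 4]
print(leaf_count_accuracy(pred_leaves, true_leaves))               # exact match
print(leaf_count_accuracy(pred_leaves, true_leaves, tolerance=2))  # within two leaves
```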


Subjects
Neural Networks (Computer), Poaceae/growth & development, Polygonum/growth & development, Image Processing (Computer-Assisted), Plant Leaves/anatomy & histology, Plant Leaves/physiology, Poaceae/anatomy & histology, Poaceae/physiology, Polygonum/anatomy & histology, Polygonum/physiology
4.
Sensors (Basel) ; 17(12)2017 Dec 17.
Article in English | MEDLINE | ID: mdl-29258215

ABSTRACT

Optimal fertilization of clover-grass fields relies on knowledge of the clover and grass fractions. This study shows how this knowledge can be obtained automatically by analyzing images collected in the field. A fully convolutional neural network was trained to produce a pixel-wise classification of clover, grass, and weeds in red, green, and blue (RGB) images of clover-grass mixtures. The clover fractions of the dry matter estimated from the images were found to be highly correlated with the real clover fractions of the dry matter, making this a cheap and non-destructive way of monitoring clover-grass fields. The network was trained solely on simulated top-down images of clover-grass fields; this nevertheless enables it to distinguish clover, grass, and weed pixels in real images. Using simulated images for training reduces the manual labor to a few hours, compared with more than 3000 h when all the real images are annotated for training. The network was tested on images with varied clover/grass ratios and achieved an overall pixel classification accuracy of 83.4%, while estimating the dry-matter clover fraction with a standard deviation of 7.8%.
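
A minimal sketch of how a simulated top-down training image and its label mask might be composed, by pasting plant cut-outs onto a soil background, is shown below. The class ids, cut-out format, and placement rules are illustrative assumptions rather than the authors' simulation pipeline.

```python
import random
import numpy as np

SOIL, GRASS, CLOVER, WEED = 0, 1, 2, 3  # assumed class ids


def compose_sample(soil_rgb, cutouts, n_plants=40, rng=random.Random(0)):
    """Paste plant cut-outs onto a soil background and build the label mask.

    soil_rgb : HxWx3 uint8 background image.
    cutouts  : list of (rgb, alpha, class_id) tuples, each smaller than the background.
    """
    image = soil_rgb.copy()
    labels = np.full(soil_rgb.shape[:2], SOIL, dtype=np.uint8)
    h, w = labels.shape
    for _ in range(n_plants):
        rgb, alpha, cls = rng.choice(cutouts)
        ph, pw = alpha.shape
        y, x = rng.randrange(h - ph), rng.randrange(w - pw)
        mask = alpha > 0.5
        # Paste the plant pixels and record their class in the label mask.
        image[y:y + ph, x:x + pw][mask] = rgb[mask]
        labels[y:y + ph, x:x + pw][mask] = cls
    return image, labels
```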

5.
Sensors (Basel) ; 17(12)2017 Nov 23.
Article in English | MEDLINE | ID: mdl-29168783

ABSTRACT

A Light Detection and Ranging (LiDAR) sensor mounted on an Unmanned Aerial Vehicle (UAV) can map the overflown environment in point clouds. Mapped canopy heights allow for the estimation of crop biomass in agriculture. The work presented in this paper contributes to sensory UAV setup design for mapping and textural analysis of agricultural fields. LiDAR data are combined with data from Global Navigation Satellite System (GNSS) and Inertial Measurement Unit (IMU) sensors to map the environment as point clouds. The proposed method facilitates LiDAR recordings in an experimental winter wheat field. Crop height estimates ranging from 0.35 to 0.58 m are correlated to the applied nitrogen treatments of 0-300 kg N ha⁻¹. The LiDAR point clouds are recorded, mapped, and analysed using the functionalities of the Robot Operating System (ROS) and the Point Cloud Library (PCL). Crop volume estimation is based on a voxel grid with a spatial resolution of 0.04 × 0.04 × 0.001 m. Two different flight patterns are evaluated at an altitude of 6 m to determine the impact of the mapped LiDAR measurements on the crop volume estimates.
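
The volume estimate rests on counting occupied voxels in a grid at the stated 0.04 × 0.04 × 0.001 m resolution. A minimal NumPy sketch of that computation is shown below; the paper itself works with ROS and PCL, and the ground-level handling here is an assumption.

```python
import numpy as np

VOXEL = np.array([0.04, 0.04, 0.001])  # x, y, z resolution in metres


def crop_volume(points: np.ndarray) -> float:
    """points: (N, 3) array of x, y, z coordinates above ground level."""
    idx = np.floor(points / VOXEL).astype(np.int64)   # voxel index of each point
    occupied = np.unique(idx, axis=0).shape[0]        # number of occupied voxels
    return occupied * float(np.prod(VOXEL))           # total occupied volume in m^3
```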

6.
Sensors (Basel) ; 17(11)2017 Nov 09.
Article in English | MEDLINE | ID: mdl-29120383

ABSTRACT

In this paper, we present a multi-modal dataset for obstacle detection in agriculture. The dataset comprises approximately 2 h of raw sensor data from a tractor-mounted sensor system in a grass mowing scenario in Denmark, October 2016. Sensing modalities include stereo camera, thermal camera, web camera, 360° camera, LiDAR, and radar, while precise localization is available from fused IMU and GNSS. Both static and moving obstacles are present, including humans, mannequin dolls, rocks, barrels, buildings, vehicles, and vegetation. All obstacles have ground-truth object labels and geographic coordinates.

7.
Sensors (Basel) ; 16(11)2016 Nov 04.
Article in English | MEDLINE | ID: mdl-27827908

ABSTRACT

Stricter legislation within the European Union regulating herbicides that are prone to leaching places a greater economic burden on the agricultural industry through taxation. This increased burden has prompted research into reducing herbicide usage. High-resolution images from digital cameras support the study of plant characteristics and can also be used to analyze shape and texture characteristics for weed identification. Instead of detecting weed patches, weed density can be estimated at a sub-patch level, through which even the identification of a single plant is possible. The aim of this study is to adapt the monocot and dicot coverage ratio vision (MoDiCoVi) algorithm to estimate dicotyledon leaf cover, perform grid spraying in real time, and present initial results in terms of potential herbicide savings in maize. The authors designed and executed an automated, large-scale field trial supported by the Armadillo autonomous tool carrier robot. The field trial consisted of 299 maize plots. Half of the plots (parcels) had additional seeded weeds; the other half had only naturally occurring weeds. The in-situ evaluation showed that, compared to conventional broadcast spraying, the proposed method can reduce herbicide usage by 65% without a measurable loss in biological effect.
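
Grid spraying reduces to a per-cell decision: spray only where the estimated dicotyledon leaf cover exceeds a threshold. The sketch below illustrates that decision and the resulting herbicide saving relative to broadcast spraying; the grid values and the threshold are illustrative assumptions, not MoDiCoVi's actual output.

```python
import numpy as np


def spray_map(dicot_cover: np.ndarray, threshold: float = 0.02) -> np.ndarray:
    """dicot_cover: per-cell fraction of dicot leaf cover (H x W grid).
    Returns a boolean map: True where the cell should be sprayed."""
    return dicot_cover >= threshold


def herbicide_saving(spray: np.ndarray) -> float:
    """Fraction of the field left unsprayed compared to broadcast spraying."""
    return 1.0 - float(spray.mean())


cover = np.array([[0.00, 0.05], [0.01, 0.30]])
decisions = spray_map(cover)
print(decisions, herbicide_saving(decisions))   # half of the cells are skipped here
```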


Subjects
Herbicides/analysis, Agriculture, Algorithms, Agricultural Crops/chemistry, Plant Leaves/chemistry, Zea mays/chemistry
8.
Sensors (Basel) ; 14(8): 13778-93, 2014 Jul 30.
Article in English | MEDLINE | ID: mdl-25196105

ABSTRACT

In agricultural mowing operations, thousands of animals are injured or killed each year due to the increased working widths and speeds of agricultural machinery. Detection and recognition of wildlife within agricultural fields are important to reduce wildlife mortality and thereby promote wildlife-friendly farming. The work presented in this paper contributes to the automated detection and classification of animals in thermal imaging. The methods and results are based on top-view images taken manually from a lift to motivate work towards unmanned aerial vehicle-based detection and recognition. Hot objects are detected based on a threshold dynamically adjusted to each frame. For the classification of animals, we propose a novel thermal feature extraction algorithm. For each detected object, a thermal signature is calculated using morphological operations. The thermal signature describes the heat characteristics of objects and is partly invariant to translation, rotation, scale, and posture. The discrete cosine transform (DCT) is used to parameterize the thermal signature and thereby calculate a feature vector, which is used for subsequent classification. Using a k-nearest-neighbor (kNN) classifier, animals are discriminated from non-animals with a balanced classification accuracy of 84.7% in an altitude range of 3-10 m and an accuracy of 75.2% in an altitude range of 10-20 m. To incorporate temporal information in the classification, a tracking algorithm is proposed. Using temporal information improves the balanced classification accuracy to 93.3% in the 3-10 m altitude range and 77.7% in the 10-20 m altitude range.
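
The classification pipeline parameterizes a thermal signature with the DCT and feeds the leading coefficients to a kNN classifier. The sketch below shows that pipeline on toy data; the signature length, number of coefficients, and k are illustrative assumptions, and the signature extraction via morphological operations is omitted.

```python
import numpy as np
from scipy.fft import dct
from sklearn.neighbors import KNeighborsClassifier


def dct_features(signature: np.ndarray, n_coeffs: int = 10) -> np.ndarray:
    """Parameterize a 1-D thermal signature by its leading DCT coefficients."""
    return dct(signature, norm="ortho")[:n_coeffs]


# Toy training data: signatures of animals (label 1) and non-animals (label 0).
rng = np.random.default_rng(0)
train_sigs = rng.random((20, 64))
train_labels = rng.integers(0, 2, 20)

knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(np.array([dct_features(s) for s in train_sigs]), train_labels)
print(knn.predict([dct_features(rng.random(64))]))   # predicted class of a new object
```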


Subjects
Wild Animals/physiology, Image Processing (Computer-Assisted)/methods, Pattern Recognition (Automated)/methods, Algorithms, Animals, Artificial Intelligence, Cluster Analysis
9.
Sensors (Basel) ; 13(5): 5585-602, 2013 Apr 26.
Article in English | MEDLINE | ID: mdl-23624690

ABSTRACT

The aim of this research is to improve plant seedling recognition with two new approaches to shape feature generation based on plant silhouettes. Experiments show that the proposed feature sets are valuable for plant recognition when compared with other feature sets. Both methods approximate the distance distribution of an object, either by resampling or by approximating the distribution with a high-degree Legendre polynomial; in the latter case, the polynomial coefficients constitute the feature set. The methods were tested in a discrimination task in which two similar plant species are to be distinguished into their respective classes. Performance is assessed by the classification accuracy of four different classifiers (k-Nearest Neighbor, Naive Bayes, linear Support Vector Machine, and nonlinear Support Vector Machine). A set of 21 well-known shape features described in the literature is used for comparison. The data consisted of 139 samples of cornflower (Centaurea cyanus L.) and 63 samples of nightshade (Solanum nigrum L.). The highest discrimination accuracy was achieved with the Legendre polynomial feature set, consisting of 10 numerical values, and amounted to 97.5%. The feature set of 21 common features achieved an accuracy of 92.5%. The results suggest that the Legendre polynomial feature set can compete with or outperform the commonly used feature sets.
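
The Legendre polynomial feature set can be sketched as follows: sample the distance from the silhouette centroid to its contour and use the coefficients of a high-degree Legendre fit as the feature vector. The degree (9 here, giving 10 coefficients to match the feature-set size quoted above) and the centroid-distance signal are assumptions about the construction, not the authors' exact definition.

```python
import numpy as np
from numpy.polynomial import legendre


def legendre_shape_features(contour: np.ndarray, degree: int = 9) -> np.ndarray:
    """contour: (N, 2) array of silhouette boundary points."""
    centroid = contour.mean(axis=0)
    distances = np.linalg.norm(contour - centroid, axis=1)
    distances /= distances.max()                  # scale invariance
    # Map the sampling positions along the contour onto the Legendre domain [-1, 1].
    t = np.linspace(-1.0, 1.0, len(distances))
    return legendre.legfit(t, distances, degree)  # 10 coefficients for degree 9
```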
